36 research outputs found

    Sound for Fantasy and Freedom

    Sound is an integral part of our everyday lives. Sound tells us about physical events in the environment, and we use our voices to share ideas and emotions through sound. When navigating the world on a day-to-day basis, most of us naturally and effortlessly use a balanced mix of stimuli from our eyes, ears and other senses. In the design of computer game experiences, however, most attention has traditionally been given to vision rather than to this balanced mix of stimuli. The risk is that this emphasis neglects the kinds of interaction with the game needed to create an immersive experience. This chapter summarizes the relationship between sound properties, GameFlow and immersive experience, and discusses two projects in which Interactive Institute, Sonic Studio has balanced perceptual stimuli and game mechanics to inspire and create new game concepts that liberate users and their imagination.

    Tapping into effective emotional reactions via a user driven audio design tool.

    A major problem when tackling any audio design task aimed at conveying important and informative content is that the finished design choices tend to reflect the designer’s own emotions, taste and value systems rather than those of the end user. In the past, the problem has been rooted in the tendency to use passive test subjects in rigid environments: subjects react to sounds with no means of controlling what they hear. This paper suggests a system for participatory sound design that generates results by activating test subjects and giving them significant control over the sounding experience under test. The audio design tool described here, the AWESOME (Auditory Work Environment Simulation Machine) Sound Design Tool, sets out to give the end user direct influence on the design process through a simple yet innovative technical application. This web-based tool allows end users to make emotive decisions about the kinds of audio signals they find most appropriate for given situations. The results can be used both to generate general knowledge about listening experiences and, more importantly, as direct user input in actual sound design processes.
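
    The abstract describes users auditioning and adjusting sounds themselves, with their choices aggregated as design input. The following is a minimal sketch of that participatory loop; the data model, field names and aggregation are illustrative assumptions, not the tool's actual design.

```python
# Hypothetical sketch of a participatory sound-rating loop in the spirit of
# the AWESOME tool: subjects audition sound variants for a situation, adjust
# a parameter themselves, and their settings are aggregated as design input.
# All names and the data model are assumptions, not the tool's actual API.
from dataclasses import dataclass, field
from statistics import mean, median

@dataclass
class Trial:
    situation: str       # e.g. "incoming alarm in a control room"
    variant: str         # sound file the subject auditioned
    pitch_offset: float  # semitones the subject chose for the variant
    rating: int          # 1-5 appropriateness judged by the subject

@dataclass
class Study:
    trials: list[Trial] = field(default_factory=list)

    def record(self, trial: Trial) -> None:
        self.trials.append(trial)

    def summary(self, situation: str) -> dict[str, tuple[float, float]]:
        """Per variant: (median rating, mean chosen pitch offset)."""
        out: dict[str, tuple[float, float]] = {}
        variants = {t.variant for t in self.trials if t.situation == situation}
        for v in variants:
            ts = [t for t in self.trials
                  if t.situation == situation and t.variant == v]
            out[v] = (median(t.rating for t in ts),
                      mean(t.pitch_offset for t in ts))
        return out

study = Study()
study.record(Trial("incoming alarm", "chime_a.wav", +2.0, 4))
study.record(Trial("incoming alarm", "chime_b.wav", -1.0, 2))
print(study.summary("incoming alarm"))
```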

    Immersion and Gameplay Experience: A Contingency Framework

    The nature of the relationship between immersion and gameplay experience is investigated, focusing primarily on the literature related to flow. In particular, this paper proposes that immersion and gameplay experience are conceptually different but empirically positively related through mechanisms related to flow. Furthermore, this study examines gamers' characteristics to determine how they influence the relationship between immersion and gameplay experience. The study involves 48 observations in one game setting. Regression analyses, including tests for moderation and simple slope analysis, reveal that gamers' age, experience, and understanding of the game moderate the relationship between immersion and gameplay experience. The results suggest that immersion contributes more positively to gameplay experience when the gamer lacks experience and understanding of the game, and when the gamer is relatively older. Implications and recommendations for future research are discussed at length in the paper.
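
    For readers unfamiliar with the method, the sketch below shows what a moderation test with simple slope analysis looks like in practice. The data are simulated and the variable names are assumptions for illustration; this is not the study's actual dataset or model.

```python
# Illustrative sketch of a moderation analysis like the one described:
# regress gameplay experience on immersion, a moderator (here: age), and
# their interaction, then probe simple slopes at low/high moderator values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48  # the study used 48 observations
age = rng.uniform(18, 45, n)
immersion = rng.normal(0, 1, n)
# Built-in moderation: immersion helps experience more for older gamers.
experience = 0.3 * immersion + 0.02 * age + 0.04 * immersion * age \
             + rng.normal(0, 0.5, n)
df = pd.DataFrame({"immersion": immersion, "age": age, "exp": experience})

# Center predictors so main effects are interpretable at the mean.
df["imm_c"] = df["immersion"] - df["immersion"].mean()
df["age_c"] = df["age"] - df["age"].mean()
model = smf.ols("exp ~ imm_c * age_c", data=df).fit()
print(model.summary().tables[1])

# Simple slopes of immersion at 1 SD below/above the mean age.
b = model.params
for z in (-1, +1):
    slope = b["imm_c"] + b["imm_c:age_c"] * z * df["age_c"].std()
    print(f"slope of immersion at age {'-' if z < 0 else '+'}1 SD: {slope:.3f}")
```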

    DigiWall - an audio mostly game

    DigiWall is a hybrid between a climbing wall and a computer game. The climbing grips are equipped with touch sensors and lights. The interface has no computer screen; instead, sound and music are the principal drivers of DigiWall's interaction models. The gaming experience combines sound and music with physical movement and the sparse visuals of the climbing grips. The DigiWall soundscape carries both verbal and non-verbal information. Verbal information includes instructions on how to play a game, scores, level numbers and so on. Non-verbal information conveys speed, position, direction, events and the like. Many different types of interaction models are possible: competitions, collaboration exercises and aesthetic experiences.
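
    One way to picture a screenless, audio-mostly interaction model of this kind is a loop that maps grip-touch events to sound, encoding position as pitch and climbing speed as tone length. The sketch below is an assumption-laden illustration; the event format, mappings and play_tone() backend are invented for the example, not DigiWall's actual implementation.

```python
# A minimal sketch of a DigiWall-style interaction model: grip touches
# arrive as events, and non-verbal feedback is rendered as sound whose
# pitch encodes grip height and whose duration encodes climbing speed.
from dataclasses import dataclass

@dataclass
class GripTouch:
    grip_id: int
    height_m: float  # height of the grip on the wall
    timestamp: float

def pitch_for_height(height_m: float, wall_height_m: float = 8.0) -> float:
    """Map wall height 0..wall_height_m onto 220..880 Hz (two octaves)."""
    frac = max(0.0, min(1.0, height_m / wall_height_m))
    return 220.0 * (2.0 ** (2.0 * frac))

def play_tone(freq_hz: float, dur_s: float) -> None:
    # Placeholder for a real synth/sampler call in the installation.
    print(f"tone {freq_hz:6.1f} Hz for {dur_s:.2f} s")

def on_touch(prev: GripTouch | None, cur: GripTouch) -> None:
    freq = pitch_for_height(cur.height_m)
    # Faster climbing -> shorter, more urgent tones (non-verbal speed cue).
    dur = 1.0 if prev is None else max(0.1, min(1.0, cur.timestamp - prev.timestamp))
    play_tone(freq, dur)

prev = None
for touch in [GripTouch(3, 1.2, 0.0), GripTouch(7, 2.5, 1.1), GripTouch(12, 4.0, 1.6)]:
    on_touch(prev, touch)
    prev = touch
```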

    Sound for enhanced experiences in mobile applications

    When visiting new places you want information about restaurants, shopping, places of historic interest and so on. Smartphones are perfect tools for delivering such location-based information, but the risk is that users get absorbed by texts, maps, videos and the like on the device screen and get a second-hand experience of the environment they are visiting rather than the sought-after first-hand experience. One problem is that the users' eyes are often directed at the device screen rather than at the surrounding environment. Another problem is that interpreting more or less abstract information on maps, texts, images and so on may take up a significant share of the users' overall cognitive resources. The work presented here tried to overcome these two problems by studying design for human-computer interaction based on the users' everyday abilities, such as directional hearing and point and sweep gestures. Today's smartphones know where you are and in what direction you are pointing the device, and they have systems for rendering spatial audio. These readily available technologies hold the potential to make information easier to interpret and use, demand fewer cognitive resources and free users from having to look more or less constantly at a device screen.
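
    The directional-hearing idea can be made concrete with a small sketch: a point of interest is rendered as sound placed in the direction it actually lies, so the user can find it by ear. Here the phone's 3D audio engine is reduced to simple equal-power stereo panning with distance attenuation; function names and the rolloff are assumptions for illustration.

```python
# Sketch: render a target as spatial audio relative to the device heading,
# so directional hearing (not the screen) tells the user where to turn.
import math

def relative_bearing(device_heading_deg: float, target_bearing_deg: float) -> float:
    """Angle of the target relative to where the device points, in -180..180."""
    return (target_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0

def stereo_gains(rel_deg: float, distance_m: float) -> tuple[float, float]:
    """Equal-power pan from the relative angle, attenuated by distance."""
    pan = max(-1.0, min(1.0, rel_deg / 90.0))   # hard left/right at +-90 degrees
    theta = (pan + 1.0) * math.pi / 4.0         # 0..pi/2
    att = 1.0 / max(1.0, distance_m / 10.0)     # crude distance rolloff
    return math.cos(theta) * att, math.sin(theta) * att

# Target sits 40 degrees to the right of the device heading, 25 m away:
left, right = stereo_gains(relative_bearing(0.0, 40.0), 25.0)
print(f"L={left:.2f} R={right:.2f}")  # right channel louder -> turn right
```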

    Using Sound to Enhance Users’ Experiences of Mobile Applications

    The latest smartphones with GPS, electronic compass, directional audio, touch screens and so on hold potential for location-based services that are easier to use than traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces can be designed that let users search for information by simply pointing in a direction. Database queries can be created from GPS location and compass direction data. Users can get guidance to locations through pointing gestures, spatial sound and simple graphics. This article describes two studies testing prototype applications with multimodal user interfaces built on spatial audio, graphics and text. Tests show that users appreciated the applications for their ease of use, for being fun and effective to use, and for allowing them to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.
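
    A query "created from GPS location and compass direction data" can be pictured as a sector filter: keep only points of interest whose bearing from the user falls within a few degrees of the direction the phone points. The sketch below shows one plausible version; the coordinates, POI list and sector width are made up for the example.

```python
# Sketch: turn (GPS position, compass heading) into a database query by
# keeping POIs inside a sector around the direction the user points.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def pointed_at(user, heading_deg, pois, half_width_deg=15.0):
    """Return POIs within +-half_width_deg of the compass heading."""
    hits = []
    for name, lat, lon in pois:
        b = bearing_deg(user[0], user[1], lat, lon)
        diff = abs((b - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= half_width_deg:
            hits.append((name, b))
    return hits

user = (59.3293, 18.0686)  # central Stockholm, purely for illustration
pois = [("Museum", 59.3326, 18.0649), ("Cafe", 59.3250, 18.0710)]
print(pointed_at(user, 340.0, pois))  # pointing roughly north-north-west
```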

    Testing Two Tools for Multimodal Navigation

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.
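
    The pointing gesture is covered by the sector query sketched above; the sweep gesture can be pictured as a large, quick, monotonic change in compass heading. The sketch below shows one plausible detector; the thresholds, sample format and the very definition of a sweep are assumptions, not the applications' actual gesture recognizer.

```python
# Sketch: detect a "sweep" from a stream of (time, compass heading) samples
# as a heading change larger than a threshold within a short time window.
def unwrap(headings_deg):
    """Remove 360-degree jumps so angular change accumulates smoothly."""
    out = [headings_deg[0]]
    for h in headings_deg[1:]:
        d = (h - out[-1] + 180.0) % 360.0 - 180.0
        out.append(out[-1] + d)
    return out

def detect_sweep(samples, min_angle_deg=60.0, max_duration_s=2.0):
    """samples: list of (time_s, heading_deg). Returns swept degrees or None."""
    times = [t for t, _ in samples]
    headings = unwrap([h for _, h in samples])
    if times[-1] - times[0] > max_duration_s:
        return None
    swept = headings[-1] - headings[0]
    return swept if abs(swept) >= min_angle_deg else None

# User turns about 90 degrees to the right in roughly one second:
stream = [(0.0, 350.0), (0.3, 10.0), (0.6, 40.0), (0.9, 75.0), (1.1, 82.0)]
print(detect_sweep(stream))  # ~92.0 -> a right sweep
```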

    II City Audio Guide

    Traditionally, navigating the physical world has been a matter of primarily cognitive effort: interpreting maps with signs and various levels of abstract visual representation of the physical world. With the advent of new and more powerful mobile technologies for GPS navigation, electronic compasses, sound playback and DSP come new opportunities to design and develop a wide range of applications for navigation. These new applications can potentially encompass and utilize more of man's skills, such as hearing, sensing, intuition, perceptual and motor skills, and emotions. Moreover, they can support modes of navigation other than pure efficiency; for example, "stumbling" and "serendipitous discovery" are modes proposed by McGookin et al. When it comes to navigational services for mobile users, safety is an issue: keeping the eyes firmly on a small screen displaying a map that shows the current position and the best route to a target destination can distract users and put them in hazardous situations. It is therefore vital to develop user interfaces that require minimal attention and do not distract users but instead free, for example, their eyes while guiding them through unknown territory. These interfaces must support the user, not distract. In addition to the above, the project described in this paper also works with what Djajadiningrat et al. call "aesthetics of interaction" and the "respect for all of man's skills".
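
    One concrete eyes-free mechanism consistent with this description is a geofenced audio guide: narration is triggered when the user walks within range of a point of interest, so the eyes never leave the city. The haversine distance check below is standard; the POI data, radii and play() call are illustrative assumptions, not the guide's actual design.

```python
# Sketch: trigger an audio clip when a GPS fix lands within a POI's radius,
# guiding the user by ear instead of by a map on screen.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def play(clip: str) -> None:
    print(f"playing {clip}")  # stand-in for the phone's audio playback

pois = [("town_hall.mp3", 59.3274, 18.0543, 50.0)]  # clip, lat, lon, radius m
played = set()

def on_gps_fix(lat: float, lon: float) -> None:
    for clip, plat, plon, radius in pois:
        if clip not in played and haversine_m(lat, lon, plat, plon) <= radius:
            played.add(clip)
            play(clip)  # narration starts; the user's eyes stay on the city

on_gps_fix(59.3272, 18.0545)  # a fix ~25 m from the town hall triggers audio
```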

    Awesome - a tool for simulating sound environments

    Sounds are (almost) always heard and perceived as parts of greater contexts. How we hear a sound depends on factors such as the other sounds present, the acoustic properties of the place where the sound is heard, and the distance and direction to the sound source. Moreover, whether a sound bears any meaning to us, and what that meaning is, depends largely on the listener's interpretation of the sound, based on memories, previous experiences and the like. When designing sounds for all sorts of applications, it is crucial not only to evaluate the sound in isolation in the design environment, but also to test it in the larger contexts where it will be used and heard. One way to do this is to sonically simulate one or more environments and use these simulations as contexts to test designed sounds against. In this paper we report on a project in which we have developed a system for simulating the sounding dimension of physical environments. The system consists of a software application, a 5.1 surround sound system, and a set of guidelines and methods for use. We also report on a first test of the system and its results.
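
    The core idea, auditioning a designed sound against a simulated ambience rather than in isolation, can be sketched as a simple mixing operation: sum looping ambience layers into a bed, then place the candidate sound into it at a chosen moment. Plain numpy stands in for the real 5.1 playback system here, and the layers, gains and signals are synthetic assumptions.

```python
# Sketch: approximate an environment as a mix of looping ambience layers
# and audition a candidate design sound against that bed.
import numpy as np

SR = 44_100  # sample rate

def loop_to(buf: np.ndarray, n: int) -> np.ndarray:
    """Tile a layer so every layer has the same length n."""
    reps = int(np.ceil(n / len(buf)))
    return np.tile(buf, reps)[:n]

def mix_environment(layers: list[tuple[np.ndarray, float]], n: int) -> np.ndarray:
    """Sum (buffer, gain) layers into one ambience bed."""
    bed = np.zeros(n)
    for buf, gain in layers:
        bed += gain * loop_to(buf, n)
    return bed

def audition(bed: np.ndarray, candidate: np.ndarray, at_s: float) -> np.ndarray:
    """Place the designed sound into the simulated context at a given time."""
    out = bed.copy()
    i = int(at_s * SR)
    out[i:i + len(candidate)] += candidate[: len(out) - i]
    return out

# Synthetic stand-ins for recorded ambience and a candidate alert sound:
t = np.linspace(0, 1, SR, endpoint=False)
ventilation = 0.05 * np.random.default_rng(1).standard_normal(SR)  # broadband hum
chatter = 0.03 * np.sin(2 * np.pi * 300 * t)                       # tonal murmur
alert = 0.2 * np.sin(2 * np.pi * 880 * t[: SR // 4])               # 250 ms beep

bed = mix_environment([(ventilation, 1.0), (chatter, 0.8)], 3 * SR)
mixed = audition(bed, alert, at_s=1.5)
print(mixed.shape, float(np.abs(mixed).max()))
```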